Optimal Stochastic Strongly Convex Optimization with a Logarithmic Number of Projections
Authors
Abstract
We consider stochastic strongly convex optimization with a complex inequality constraint, which may lead to computationally expensive projections in the iterations of stochastic gradient descent (SGD) methods. To reduce the computational cost of these projections, we propose an Epoch-Projection Stochastic Gradient Descent (Epro-SGD) method. Epro-SGD consists of a sequence of epochs: within each epoch it applies SGD to an augmented objective function at every iteration, and it performs a projection only at the end of the epoch. For a strongly convex optimization problem and a total of T iterations, Epro-SGD requires only log(T) projections while attaining an optimal convergence rate of O(1/T), both in expectation and with high probability. To exploit the structure of the optimization problem, we further propose a proximal variant of Epro-SGD, namely Epro-ORDA, based on the optimal regularized dual averaging method. We apply the proposed methods to real-world applications; the empirical results demonstrate their effectiveness.
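To make the epoch structure concrete, below is a minimal sketch of the epoch-projection idea; the helper names (grad_f, grad_c, project), the halving step size, and the doubling epoch length are illustrative assumptions rather than the paper's exact schedule.

```python
import numpy as np

def epro_sgd(grad_f, c, grad_c, project, x0, n_epochs, epoch_len, eta0, lam):
    """Illustrative sketch of an epoch-projection SGD loop.

    grad_f(x)       -- a stochastic (sub)gradient of the objective f at x
    c(x), grad_c(x) -- the constraint c(x) <= 0 and a (sub)gradient of c
    project(x)      -- exact projection onto {x : c(x) <= 0}; called once per epoch
    lam             -- penalty weight in the augmented objective f(x) + lam * max(0, c(x))
    """
    x = np.asarray(x0, dtype=float)
    for k in range(n_epochs):
        eta = eta0 / 2 ** k                  # shrink the step size each epoch
        x_sum = np.zeros_like(x)
        for _ in range(epoch_len):
            g = grad_f(x)
            if c(x) > 0:                     # penalty term is active only when infeasible
                g = g + lam * grad_c(x)
            x = x - eta * g                  # unconstrained step on the augmented objective
            x_sum += x
        x = project(x_sum / epoch_len)       # the single projection of this epoch
        epoch_len *= 2                       # longer epochs as the step size shrinks
    return x
```

With epoch lengths doubling, T total iterations fit into roughly log2(T) epochs, which is where the log(T) projection count comes from.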
Similar Resources
O(logT) Projections for Stochastic Optimization of Smooth and Strongly Convex Functions
Traditional algorithms for stochastic optimization require projecting the solution at each iteration onto a given domain to ensure its feasibility. When facing complex domains, such as the positive semidefinite cone, the projection operation can be expensive, leading to a high computational cost per iteration. In this paper, we present a novel algorithm that aims to reduce the number of projections…
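To see why such projections can dominate the per-iteration cost, consider the standard Euclidean projection onto the positive semidefinite cone, which requires a full eigendecomposition; the sketch below assumes a dense NumPy matrix.

```python
import numpy as np

def project_psd(A):
    """Euclidean projection of a square matrix onto the PSD cone: symmetrize,
    then clip the negative eigenvalues to zero. The eigendecomposition costs
    O(n^3), which is what makes a per-iteration projection expensive."""
    S = (A + A.T) / 2.0
    w, V = np.linalg.eigh(S)                  # spectrum of the symmetric part
    return (V * np.clip(w, 0.0, None)) @ V.T  # rebuild with the clipped spectrum
```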
Optimal Stochastic Approximation Algorithms for Strongly Convex Stochastic Composite Optimization I: A Generic Algorithmic Framework
In this paper we present a generic algorithmic framework, namely, the accelerated stochastic approximation (AC-SA) algorithm, for solving strongly convex stochastic composite optimization (SCO) problems. While the classical stochastic approximation (SA) algorithms are asymptotically optimal for solving differentiable and strongly convex problems, the AC-SA algorithm, when employed with proper stepsize policies…
Accelerate Stochastic Subgradient Method by Leveraging Local Error Bound
In this paper, we propose two accelerated stochastic subgradient methods for stochastic non-strongly convex optimization problems by leveraging a generic local error bound condition. The novelty of the proposed methods lies in using the recent historical solution to reduce the variance of the stochastic subgradient. The key idea of both methods is to iteratively solve the original …
Random gradient extrapolation for distributed and stochastic optimization
In this paper, we consider a class of finite-sum convex optimization problems defined over a distributed multiagent network with m agents connected to a central server. In particular, the objective function consists of the average of m (≥ 1) smooth components, one associated with each network agent, together with a strongly convex term. Our major contribution is to develop a new randomized incremental…
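In symbols, and with notation assumed here rather than quoted from the paper, this problem class can be written as min_x (1/m) Σ_{i=1}^m f_i(x) + w(x), where each f_i is the smooth component held by agent i and w is the strongly convex term.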
Efficient Stochastic Gradient Descent for Strongly Convex Optimization
We motivate this study from a recent work on a stochastic gradient descent (SGD) method with only one projection (Mahdavi et al., 2012), which aims at alleviating the computational bottleneck of the standard SGD method in performing the projection at each iteration, and enjoys an O(log T / T) convergence rate for strongly convex optimization. In this paper, we make further contributions along this line…
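As a rough illustration of the one-projection scheme, the sketch below penalizes infeasibility during the run and projects only the final averaged iterate; the helper names and the 1/(mu*t) step size are assumptions for a mu-strongly convex objective, not the exact algorithm of Mahdavi et al. (2012).

```python
import numpy as np

def one_projection_sgd(grad_f, c, grad_c, project, x0, T, lam, mu):
    """Sketch of SGD with a single projection: run on the penalized objective
    f(x) + lam * max(0, c(x)) and project only the averaged final solution."""
    x = np.asarray(x0, dtype=float)
    x_sum = np.zeros_like(x)
    for t in range(1, T + 1):
        g = grad_f(x)
        if c(x) > 0:                 # add the penalty subgradient only when infeasible
            g = g + lam * grad_c(x)
        x = x - g / (mu * t)         # 1/(mu*t) step size for mu-strong convexity
        x_sum += x
    return project(x_sum / T)        # the only projection in the entire run
```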
Publication date: 2016